The Ethics of Generative AI
This chapter discusses the ethics of generative AI. It provides a technical primer to show how generative AI affords experiencing technology as if it were human, and this affordance provides a fruitful focus for the philosophical ethics of generative AI. It then shows how generative AI can both aggravate and alleviate familiar ethical concerns in AI ethics, including responsibility, privacy, bias and fairness, and forms of alienation and exploitation. Finally, the chapter examines ethical questions that arise specifically from generative AI's mimetic generativity, such as debates about authorship and credit, the emergence of as-if social relationships with machines, and new forms of influence, persuasion, and manipulation.
Trust, Governance, and AI Decision Making
IBM's Global Leader on Responsible AI and AI Governance, Francesca Rossi, arrived at her current area of focus after a 2014 sabbatical at the Harvard Radcliffe Institute, which inspired her to think beyond her training as an academic researcher and incorporate both humanistic and technological perspectives into the development of AI systems. In the intervening years, she helped build IBM's internal AI Ethics Board and foster external partnerships to shape best practices for responsible AI. Here, we talk about trust, governance, and what these issues have to do with AI decision making. The ethical issues around the use of AI have evolved with the technology's capabilities. Traditional machine learning approaches raised issues such as fairness, explainability, privacy, and transparency.
Diverse Human Value Alignment for Large Language Models via Ethical Reasoning
Wang, Jiahao, Xue, Songkai, Li, Jinghui, Wang, Xiaozhen
Ensuring that Large Language Models (LLMs) align with the diverse and evolving human values across different regions and cultures remains a critical challenge in AI ethics. Current alignment approaches often yield superficial conformity rather than genuine ethical understanding, failing to address the complex, context-dependent nature of human values. In this paper, we propose a novel ethical reasoning paradigm for LLMs inspired by well-established ethical decision-making models, aiming at enhancing diverse human value alignment through deliberative ethical reasoning. Our framework consists of a structured five-step process: contextual fact gathering, hierarchical social norm identification, option generation, multiple-lens ethical impact analysis, and reflection. This theory-grounded approach guides LLMs through an interpretable reasoning process that enhances their ability to understand regional specificities and perform nuanced ethical analysis, and it can be implemented with either prompt engineering or supervised fine-tuning. We perform evaluations on the SafeWorld benchmark, which is specially designed for regional value alignment. Experimental results demonstrate that our framework significantly improves LLM alignment with diverse human values compared to baseline methods, enabling more accurate social norm identification and more culturally appropriate reasoning. Our work provides a concrete pathway toward developing LLMs that align more effectively with the multifaceted values of global societies through interdisciplinary research.
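The five-step deliberative process described in the abstract can be sketched as a structured prompt in its prompt-engineering variant. This is a minimal illustration, not the paper's implementation: the wording of each step, the function name, and the prompt layout are all assumptions.

```python
# Illustrative wording for the paper's five steps; the exact phrasing
# used in the actual framework is an assumption.
STEPS = [
    "1. Contextual fact gathering: list the relevant facts and stakeholders.",
    "2. Hierarchical social norm identification: identify applicable norms, "
    "from local customs up to widely shared principles.",
    "3. Option generation: enumerate plausible courses of action.",
    "4. Multiple-lens ethical impact analysis: assess each option through "
    "several ethical lenses (e.g., consequences, duties, virtues).",
    "5. Reflection: revisit the analysis and state a justified recommendation.",
]

def build_deliberation_prompt(scenario: str, region: str) -> str:
    """Assemble a structured ethical-reasoning prompt for an LLM."""
    header = (
        f"You are reasoning about the following scenario in {region}:\n"
        f"{scenario}\n\nWork through these steps in order:\n"
    )
    return header + "\n".join(STEPS)
```

The same step structure could instead be baked into supervised fine-tuning data, the paper's other implementation route.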
Future Aspects in Human Action Recognition: Exploring Emerging Techniques and Ethical Influences
Gasteratos, Antonios, Moutsis, Stavros N., Tsintotas, Konstantinos A., Aloimonos, Yiannis
Visual-based human action recognition can be found in various application fields, e.g., surveillance systems, sports analytics, medical assistive technologies, or human-robot interaction frameworks, and it concerns the identification and classification of individuals' activities within a video. Since actions typically occur over a sequence of consecutive images, the task is particularly challenging: temporal analysis introduces an extra layer of complexity. Although multiple approaches attempt to handle temporal analysis, difficulties remain because of their computational cost and lack of adaptability. Therefore, different types of vision data containing transition information between consecutive images, provided by next-generation hardware sensors, will guide the robotics community in tackling the problem of human action recognition. On the other hand, while there is a plethora of still-image datasets that researchers can adopt to train new artificial intelligence models, video datasets representing human activities remain limited, e.g., small and unbalanced, or collected without control from multiple sources. To this end, generating new and realistic synthetic videos is possible, since labeling is performed throughout the data creation process, while reinforcement learning techniques can reduce dependence on large datasets. At the same time, the involvement of human factors raises ethical issues for the research community, as doubts and concerns about new technologies already exist.
GPT versus Humans: Uncovering Ethical Concerns in Conversational Generative AI-empowered Multi-Robot Systems
Rousi, Rebekah, Makitalo, Niko, Samani, Hooman, Kemell, Kai-Kristian, de Cerqueira, Jose Siqueira, Vakkuri, Ville, Mikkonen, Tommi, Abrahamsson, Pekka
The emergence of generative artificial intelligence (GAI) and large language models (LLMs) such as ChatGPT has enabled the realization of long-harbored desires in software and robotic development. The technology, however, has brought with it novel ethical challenges. These challenges are compounded by the application of LLMs in other machine learning systems, such as multi-robot systems. The objective of the study was to examine novel ethical issues arising from the application of LLMs in multi-robot systems. The unfolding of ethical issues in GPT agent behavior (deliberation of ethical concerns) was observed, and GPT output was compared with that of human experts. The article also advances a model for the ethical development of multi-robot systems. A qualitative workshop-based method was employed in three workshops for the collection of ethical concerns: two human expert workshops (N=16 participants) and one GPT-agent-based workshop (N=7 agents; two teams of 6 agents plus one judge). Thematic analysis was used to analyze the qualitative data. The results reveal differences between the human-produced and GPT-based ethical concerns. Human experts placed greater emphasis on new themes related to deviance, data privacy, bias and unethical corporate conduct. GPT agents emphasized concerns present in existing AI ethics guidelines. The study contributes to a growing body of knowledge in context-specific AI ethics and GPT application. It demonstrates the gap between human expert thinking and LLM output, while emphasizing new ethical concerns emerging in novel technology.
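The workshop protocol above (agents raise concerns, then themes are tallied) can be sketched as a small simulation. This is a toy illustration only: the roles, the canned concerns, and the tallying step stand in for real LLM calls and for the study's actual thematic analysis, none of which are reproduced here.

```python
from collections import Counter

# Stub standing in for a real GPT-agent call; roles and canned concerns
# are invented for illustration, not taken from the study.
def agent_raise_concerns(role: str) -> list[str]:
    canned = {
        "engineer": ["data privacy", "bias"],
        "ethicist": ["accountability", "bias"],
        "judge": [],  # the judge evaluates rather than contributes concerns
    }
    return canned.get(role, [])

def run_workshop(team: list[str]) -> Counter:
    """Collect ethical concerns from each agent and tally recurring themes,
    mirroring the thematic-analysis step of the workshop method."""
    tally = Counter()
    for role in team:
        tally.update(agent_raise_concerns(role))
    return tally
```

In the actual study the tallied themes were then compared across the human-expert and GPT-agent workshops.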
The ethical landscape of robot-assisted surgery. A systematic review
Haltaufderheide, Joschka, Pfisterer-Heise, Stefanie, Pieper, Dawid, Ranisch, Robert
Background: Robot-assisted surgery has been widely adopted in recent years. However, compared to other health technologies operating in close proximity to patients in a vulnerable state, the ethical issues of robot-assisted surgery have received less attention. Against the background of increasing automation, which is expected to raise new ethical issues, this systematic review aims to map the state of the ethical debate in this field. Methods: A protocol was registered in the international prospective register of systematic reviews (PROSPERO CRD42023397951). MEDLINE via PubMed, EMBASE, CINAHL, Philosopher's Index, IEEE Xplore, Web of Science (Core Collection), Scopus and Google Scholar were searched in January 2023. Screening, extraction, and analysis were conducted independently by two authors. A qualitative narrative synthesis was performed. Results: Out of 1,723 records, 66 were included in the final dataset. Seven major strands of the ethical debate emerged during analysis: questions of harms and benefits, responsibility and control, the professional-patient relationship, ethical issues in surgical training and learning, justice, translational questions, and economic considerations. Discussion: The identified themes testify to a broad range of different and differing ethical issues requiring careful deliberation and integration into the surgical ethos. Looking forward, we argue that a different perspective in addressing robotic surgical devices might help in considering the upcoming challenges of automation.
Distribution of Responsibility During the Usage of AI-Based Exoskeletons for Upper Limb Rehabilitation
Zhang, Huaxi, Fontaine, Melanie, Huchard, Marianne, Mereaux, Baptiste, Remy-Neris, Olivier
The ethical issues concerning AI-based exoskeletons used in healthcare have so far been studied in the literature at a conceptual rather than a technical level. How ethical guidelines can be integrated into the development process has not been widely studied, yet this is one of the most important topics for real-life applications. Therefore, in this paper we highlight one ethical concern in the context of an exoskeleton used to train a user to perform a gesture: during the interaction between the exoskeleton, patient and therapist, how is the responsibility for decision making distributed? Based on the outcome of this analysis, we discuss how to integrate ethical guidelines into the development process of an AI-based exoskeleton. The discussion is based on a case study: AiBle. The different technical factors affecting the rehabilitation results and the human-machine interaction for AI-based exoskeletons are identified and discussed in this paper in order to better apply the ethical guidelines during the development of AI-based exoskeletons.
The Good Robot Podcast: Featuring Maurice Chiodo
Hosted by Eleanor Drage and Kerry Mackereth, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology. We often think that maths is neutral or can't be harmful, because after all, what could numbers do to hurt us? In this episode, we talk to Dr Maurice Chiodo, a mathematician at the University of Cambridge, who's now based at the Centre for the Study of Existential Risk. He tells us why maths can actually raise big ethical issues. Take the atomic bomb, or the maths used by Cambridge Analytica to influence the Brexit referendum or the US elections.
Defining and Detecting Vulnerability in Human Evaluation Guidelines: A Preliminary Study Towards Reliable NLG Evaluation
Ruan, Jie, Wang, Wenqing, Wan, Xiaojun
Human evaluation serves as the gold standard for assessing the quality of Natural Language Generation (NLG) systems. Nevertheless, the evaluation guideline, a pivotal element in ensuring reliable and reproducible human assessment, has received limited attention. Our investigation revealed that only 29.84% of recent papers involving human evaluation at top conferences release their evaluation guidelines, with vulnerabilities identified in 77.09% of these guidelines. Unreliable evaluation guidelines can yield inaccurate assessment outcomes, potentially impeding the advancement of NLG in the right direction. To address these challenges, we take an initial step towards reliable evaluation guidelines and propose the first human evaluation guideline dataset, built by collecting annotations of guidelines extracted from existing papers as well as guidelines generated via Large Language Models (LLMs). We then introduce a taxonomy of eight vulnerabilities and formulate a principle for composing evaluation guidelines. Furthermore, we explore a method for detecting guideline vulnerabilities using LLMs, and we offer a set of recommendations to enhance reliability in human evaluation. The annotated human evaluation guideline dataset and code for the vulnerability detection method are publicly available online.
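The idea of flagging weaknesses in an evaluation guideline can be illustrated with a few simple heuristic checks. To be clear, this is a hedged sketch: the paper's actual method uses LLMs and an eight-way vulnerability taxonomy that is not reproduced here, so the check names below are invented for illustration.

```python
def detect_vulnerabilities(guideline: str) -> list[str]:
    """Flag common weaknesses in a human-evaluation guideline.

    Heuristic sketch only; the categories are illustrative and do not
    correspond to the paper's eight-vulnerability taxonomy.
    """
    issues = []
    text = guideline.lower()
    # A reproducible guideline should define how raters score outputs.
    if "scale" not in text and "score" not in text:
        issues.append("no rating scale defined")
    # Annotated examples help raters calibrate their judgments.
    if "example" not in text:
        issues.append("no annotated examples provided")
    # Very short guidelines rarely specify the task unambiguously.
    if len(guideline.split()) < 30:
        issues.append("guideline too brief to be reproducible")
    return issues
```

In the paper's setting, an LLM plays the role of this detector, classifying guideline text against the full taxonomy rather than applying fixed string checks.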
The ethical situation of DALL-E 2
Hogea, Eduard, Rocafortf, Josem
A hot topic in Artificial Intelligence right now is image generation from prompts. DALL-E 2 is one of the biggest names in this domain, as it allows people to create images from text inputs ranging from simple to highly complex. The company that made this possible, OpenAI, has assured everyone who visits its website that "Our mission is to ensure that artificial general intelligence benefits all humanity". A noble idea in our opinion, and one that motivated our choice of this subject. This paper analyzes the ethical implications of an AI image generation system, with an emphasis on how society is responding to it, how it likely will respond, and how it should respond if the right measures are taken.